
     1  # Creating a network from scratch
     2  
     3  This section provides easy-to-follow, step-by-step instructions for setting up one or more Quorum nodes from scratch.
     4  
     5  Let's walk through the steps to set up a Quorum node with Raft consensus.
     6  
     7  ## Quorum with Raft consensus
     8  1. On each machine, build Quorum as described in the [Installing](../Installing) section. Ensure that `PATH` contains `geth` and `bootnode`
     9      ```
    10      $ git clone https://github.com/jpmorganchase/quorum.git
    11      $ cd quorum
    12      $ make all
    13      $ export PATH=$(pwd)/build/bin:$PATH
    14      ```
    15  
    16  2. Create a working directory which will be the base for the new node(s) and change into it
    17      ```
    18      $ mkdir fromscratch
    19      $ cd fromscratch
    20      $ mkdir new-node-1
    21      ```
    22  3. Generate one or more accounts for this node and note down the account address. A funded account may be required, depending on what you are trying to accomplish
    23      ```
    24      $ geth --datadir new-node-1 account new
    25      
    26       INFO [06-07|14:52:18.742] Maximum peer count                       ETH=25 LES=0 total=25
    27       Your new account is locked with a password. Please give a password. Do not forget this password.
    28       Passphrase: 
    29       Repeat passphrase: 
    30       Address: {679fed8f4f3ea421689136b25073c6da7973418f}
    31       
    32       Please note the keystore file generated inside new-node-1 includes the address in the last part of its filename.
    33       
    34       $ ls new-node-1/keystore
    35       UTC--2019-06-17T09-29-06.665107000Z--679fed8f4f3ea421689136b25073c6da7973418f
    36      ```
    37  
    38      !!! note 
    39          You could generate multiple accounts for a single node, or any number of accounts for additional nodes and pre-allocate them with funds in the genesis.json file (see below)
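
    Since the keystore filename embeds the account address after the final `--`, the address can be recovered from the filename alone; a small shell sketch using the example filename above:

    ```shell
    # The keystore filename ends with the account address; strip everything
    # up to and including the last "--" to recover it.
    F='UTC--2019-06-17T09-29-06.665107000Z--679fed8f4f3ea421689136b25073c6da7973418f'
    echo "${F##*--}"
    ```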
    40         
    41  4. Create a `genesis.json` file; see an example [here](../genesis). The `alloc` field should be pre-populated with the account(s) you generated in the previous step
    42      ```
    43      $ vim genesis.json
    44      ... `alloc` holds optional accounts with pre-funded amounts. In this example we fund the accounts 679fed8f4f3ea421689136b25073c6da7973418f (generated in the step above) and c5c7b431e1629fb992eb18a79559f667228cd055.
    45      {
    46        "alloc": {
    47          "0x679fed8f4f3ea421689136b25073c6da7973418f": {
    48            "balance": "1000000000000000000000000000"
    49          },
    50         "0xc5c7b431e1629fb992eb18a79559f667228cd055": {
    51            "balance": "2000000000000000000000000000"
    52          }
    53      },
    54       "coinbase": "0x0000000000000000000000000000000000000000",
    55       "config": {
    56         "homesteadBlock": 0,
    57         "byzantiumBlock": 0,
    58         "chainId": 10,
    59         "eip150Block": 0,
    60         "eip155Block": 0,
    61         "eip150Hash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    62         "eip158Block": 0,
    63         "isQuorum": true
    64        },
    65       "difficulty": "0x0",
    66       "extraData": "0x0000000000000000000000000000000000000000000000000000000000000000",
    67       "gasLimit": "0xE0000000",
    68       "mixhash": "0x00000000000000000000000000000000000000647572616c65787365646c6578",
    69       "nonce": "0x0",
    70       "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    71       "timestamp": "0x00"
    72      }   
    73      ```
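
    The `balance` values are decimal strings denominated in wei (1 ether = 10^18 wei). As a quick sanity check, the first allocation above works out to one billion ether:

    ```shell
    # 10^27 wei divided by 10^18 wei-per-ether is 10^9 ether. awk treats the
    # literals as floating point, which is accurate enough for these powers of ten.
    awk 'BEGIN { printf "%.0f ether\n", 1000000000000000000000000000 / 1000000000000000000 }'
    ```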
    74  5. Generate node key and copy it into datadir
    75      ```
    76      $ bootnode --genkey=nodekey
    77      $ cp nodekey new-node-1/
    78      ```
    79  6. Execute the below command to display the enode ID of the new node
    80      ```
    81      $ bootnode --nodekey=new-node-1/nodekey --writeaddress > new-node-1/enode
    82      $ cat new-node-1/enode  
    83      '70399c3d1654c959a02b73acbdd4770109e39573a27a9b52bd391e5f79b91a42d8f2b9e982959402a97d2cbcb5656d778ba8661ec97909abc72e7bb04392ebd8'
    84      ```
    85  7. Create a file called `static-nodes.json` and edit it to match this [example](../permissioned-nodes). Your file should contain a single line for your node, with your enode's ID and the ports you are going to use for devp2p and raft. Ensure that this file is in your node's data directory
    86      ```
    87      $ vim static-nodes.json
    88      .... paste the below lines, using the enode generated in the previous step, with IP 127.0.0.1, devp2p port 21000, and raft port 50000
    89      [
    90        "enode://70399c3d1654c959a02b73acbdd4770109e39573a27a9b52bd391e5f79b91a42d8f2b9e982959402a97d2cbcb5656d778ba8661ec97909abc72e7bb04392ebd8@127.0.0.1:21000?discport=0&raftport=50000"
    91      ] 
    92      $ cp static-nodes.json new-node-1
    93      ```
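
    A malformed `static-nodes.json` entry is a common source of startup problems; the hex node ID inside each `enode://` URI should be exactly 128 characters. A quick check against the entry above:

    ```shell
    # Extract the hex node ID from the enode URI and verify its length.
    ENTRY='enode://70399c3d1654c959a02b73acbdd4770109e39573a27a9b52bd391e5f79b91a42d8f2b9e982959402a97d2cbcb5656d778ba8661ec97909abc72e7bb04392ebd8@127.0.0.1:21000?discport=0&raftport=50000'
    ID=${ENTRY#enode://}   # drop the scheme
    ID=${ID%%@*}           # drop the host/port/query part
    echo "node id length: ${#ID}"
    ```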
    94  8. Initialize the new node with the below command.
    95      ```
    96      $ geth --datadir new-node-1 init genesis.json
    97      
    98      INFO [06-07|15:45:17.508] Maximum peer count                       ETH=25 LES=0 total=25
    99      INFO [06-07|15:45:17.516] Allocated cache and file handles         database=/Users/krish/new-node-1/geth/chaindata cache=16 handles=16
   100      INFO [06-07|15:45:17.524] Writing custom genesis block 
   101      INFO [06-07|15:45:17.524] Persisted trie from memory database      nodes=1 size=152.00B time=75.344µs gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
   102      INFO [06-07|15:45:17.525] Successfully wrote genesis state         database=chaindata                              hash=ec0542…9665bf
   103      INFO [06-07|15:45:17.525] Allocated cache and file handles         database=/Users/krish/new-node-1/geth/lightchaindata cache=16 handles=16
   104      INFO [06-07|15:45:17.527] Writing custom genesis block 
   105      INFO [06-07|15:45:17.527] Persisted trie from memory database      nodes=1 size=152.00B time=60.76µs  gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
   106      INFO [06-07|15:45:17.527] Successfully wrote genesis state         database=lightchaindata                              hash=ec0542…9665bf
   107      ```
   108  9. Start your node by first creating a script as below and then starting it:
   109      ```
   110      $ vim startnode1.sh
   111      ... paste the below commands; the script starts the node in the background.
   112      #!/bin/bash
   113      PRIVATE_CONFIG=ignore nohup geth --datadir new-node-1 --nodiscover --verbosity 5 --networkid 31337 --raft --raftport 50000 --rpc --rpcaddr 0.0.0.0 --rpcport 22000 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,raft --emitcheckpoints --port 21000 >> node.log 2>&1 &
   114  
   115      $ chmod +x startnode1.sh 
   116      $ ./startnode1.sh
   117      ``` 
   118      
   119      !!! note
   120          This configuration starts Quorum without privacy support, as indicated by the `PRIVATE_CONFIG=ignore` prefix. Please see the sections below on [how to enable privacy with privacy transaction managers](#adding-privacy-transaction-manager).
   121  
   122      Your node is now operational and you may attach to it with the below command.
   123      ```
   124      $ geth attach new-node-1/geth.ipc
   125      Welcome to the Geth JavaScript console!
   126      
   127      instance: Geth/v1.8.18-stable-bb88608c(quorum-v2.2.3)/darwin-amd64/go1.10.2
   128      coinbase: 0xedf53f5bf40c99f48df184441137659aed899c48
   129      at block: 0 (Thu, 01 Jan 1970 01:00:00 BST)
   130       datadir: /Users/krish/fromscratch/new-node-1
   131       modules: admin:1.0 debug:1.0 eth:1.0 ethash:1.0 miner:1.0 net:1.0 personal:1.0 raft:1.0 rpc:1.0 txpool:1.0 web3:1.0
   132      
   133      > raft.cluster
   134      [{
   135          ip: "127.0.0.1",
   136          nodeId: "a5596803caebdc9c5326e1a0775563ad8e4aa14aa3530f0ae16d3fd8d7e48bc0b81342064e22094ab5d10303ab5721650af561f2bcdc54d705f8b6a8c07c94c3",
   137          p2pPort: 21000,
   138          raftId: 1,
   139          raftPort: 50000
   140      }]
   141      > raft.leader
   142      "a5596803caebdc9c5326e1a0775563ad8e4aa14aa3530f0ae16d3fd8d7e48bc0b81342064e22094ab5d10303ab5721650af561f2bcdc54d705f8b6a8c07c94c3"
   143      > raft.role
   144      "minter"
   145      > 
   146      > exit
   147      ```
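
    Since the start script opened an HTTP RPC listener on port 22000, the same information is also available over JSON-RPC. The `curl` call below is shown as a comment because it requires the running node; the request body itself is the standard `eth_blockNumber` payload:

    ```shell
    # With the node up, the request could be sent as:
    #   curl -s -H 'Content-Type: application/json' -d "$PAYLOAD" http://127.0.0.1:22000
    # Construct the standard eth_blockNumber request body.
    PAYLOAD='{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
    echo "$PAYLOAD"
    ```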
   148  
   149  ### Adding additional node
   150  1. Complete steps 1, 2, 5, and 6 from the previous guide
   151  
   152      ```
   153      $ mkdir new-node-2
   154      $ bootnode --genkey=nodekey2
   155      $ cp nodekey2 new-node-2/nodekey
   156      $ bootnode --nodekey=new-node-2/nodekey --writeaddress
   157      56e81550db3ccbfb5eb69c0cfe3f4a7135c931a1bae79ea69a1a1c6092cdcbea4c76a556c3af977756f95d8bf9d7b38ab50ae070da390d3abb3d7e773099c1a9
   158      ```
   159  2. Retrieve the current chain's `genesis.json` and `static-nodes.json` files. `static-nodes.json` should be placed into the new node's data directory
   160      ```
   161      $ cp static-nodes.json new-node-2    
   162      ```
   163  
   164  3. Edit `static-nodes.json` and add a new entry for the new node you are configuring (it should be last)
   165     ```
   166     $ vim new-node-2/static-nodes.json 
   167     .... append new-node-2's enode generated in step 1, with IP 127.0.0.1, devp2p port 21001, and raft port 50001
   168  
   169     [
   170       "enode://70399c3d1654c959a02b73acbdd4770109e39573a27a9b52bd391e5f79b91a42d8f2b9e982959402a97d2cbcb5656d778ba8661ec97909abc72e7bb04392ebd8@127.0.0.1:21000?discport=0&raftport=50000",
   171       "enode://56e81550db3ccbfb5eb69c0cfe3f4a7135c931a1bae79ea69a1a1c6092cdcbea4c76a556c3af977756f95d8bf9d7b38ab50ae070da390d3abb3d7e773099c1a9@127.0.0.1:21001?discport=0&raftport=50001"
   172     ]
   173     ```  
   174      
   175  4. Initialize the new node as shown below: 
   176  
   177      ```
   178      $ geth --datadir new-node-2 init genesis.json
   179      
   180      INFO [06-07|16:34:39.805] Maximum peer count                       ETH=25 LES=0 total=25
   181      INFO [06-07|16:34:39.814] Allocated cache and file handles         database=/Users/krish/fromscratch/new-node-2/geth/chaindata cache=16 handles=16
   182      INFO [06-07|16:34:39.816] Writing custom genesis block 
   183      INFO [06-07|16:34:39.817] Persisted trie from memory database      nodes=1 size=152.00B time=59.548µs gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
   184      INFO [06-07|16:34:39.817] Successfully wrote genesis state         database=chaindata                                          hash=f02d0b…ed214a
   185      INFO [06-07|16:34:39.817] Allocated cache and file handles         database=/Users/krish/fromscratch/new-node-2/geth/lightchaindata cache=16 handles=16
   186      INFO [06-07|16:34:39.819] Writing custom genesis block 
   187      INFO [06-07|16:34:39.819] Persisted trie from memory database      nodes=1 size=152.00B time=43.733µs gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
   188      INFO [06-07|16:34:39.819] Successfully wrote genesis state         database=lightchaindata                                          hash=f02d0b…ed214a
   189      ```
   190      
   191  5. Connect to an already running node of the chain and execute raft `addPeer` command.
   192      ```
   193      $ geth attach new-node-1/geth.ipc
   194      Welcome to the Geth JavaScript console!
   195      
   196      instance: Geth/v1.8.18-stable-bb88608c(quorum-v2.2.3)/darwin-amd64/go1.10.2
   197      coinbase: 0xedf53f5bf40c99f48df184441137659aed899c48
   198      at block: 0 (Thu, 01 Jan 1970 01:00:00 BST)
   199       datadir: /Users/krish/fromscratch/new-node-1
   200       modules: admin:1.0 debug:1.0 eth:1.0 ethash:1.0 miner:1.0 net:1.0 personal:1.0 raft:1.0 rpc:1.0 txpool:1.0 web3:1.0
   201      
   202      > raft.addPeer('enode://56e81550db3ccbfb5eb69c0cfe3f4a7135c931a1bae79ea69a1a1c6092cdcbea4c76a556c3af977756f95d8bf9d7b38ab50ae070da390d3abb3d7e773099c1a9@127.0.0.1:21001?discport=0&raftport=50001')
   203      2
   204      > raft.cluster
   205      [{
   206          ip: "127.0.0.1",
   207          nodeId: "56e81550db3ccbfb5eb69c0cfe3f4a7135c931a1bae79ea69a1a1c6092cdcbea4c76a556c3af977756f95d8bf9d7b38ab50ae070da390d3abb3d7e773099c1a9",
   208          p2pPort: 21001,
   209          raftId: 2,
   210          raftPort: 50001
   211      }, {
   212          ip: "127.0.0.1",
   213          nodeId: "70399c3d1654c959a02b73acbdd4770109e39573a27a9b52bd391e5f79b91a42d8f2b9e982959402a97d2cbcb5656d778ba8661ec97909abc72e7bb04392ebd8",
   214          p2pPort: 21000,
   215          raftId: 1,
   216          raftPort: 50000
   217      }]
   218      > exit
   219      ```
   220  6. Start your node by first creating a script as in the previous step, changing the ports used for devp2p and raft, and setting `--raftjoinexisting` to the raft ID returned by `raft.addPeer` in the previous step (2 in this example).
   221      ```
   222      $ cp startnode1.sh startnode2.sh
   223      $ vim startnode2.sh
   224      ..... paste below details
   225      #!/bin/bash
   226      PRIVATE_CONFIG=ignore nohup geth --datadir new-node-2 --nodiscover --verbosity 5 --networkid 31337 --raft --raftport 50001 --raftjoinexisting 2 --rpc --rpcaddr 0.0.0.0 --rpcport 22001 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,raft --emitcheckpoints --port 21001 2>>node2.log &
   227      
   228      $ ./startnode2.sh 
   229      ```
   230  
   231  7. Optional: share the new `static-nodes.json` with all other chain participants
   232      ```
   233      $ cp new-node-2/static-nodes.json new-node-1
   234      ```
   235  
   236      Your additional node is now operational and is part of the same chain as the previously set up node.
   237  
   238  ### Removing node
   239  1. Connect to an already running node of the chain, execute `raft.cluster`, and note the `RAFT_ID` corresponding to the node that needs to be removed
   240  2. Run `raft.removePeer(RAFT_ID)` 
   241  3. Stop the `geth` process corresponding to the node that was removed.
   242      ```
   243      $ geth attach new-node-1/geth.ipc
   244      Welcome to the Geth JavaScript console!
   245      
   246      instance: Geth/v1.8.18-stable-bb88608c(quorum-v2.2.3)/darwin-amd64/go1.10.2
   247      coinbase: 0xedf53f5bf40c99f48df184441137659aed899c48
   248      at block: 0 (Thu, 01 Jan 1970 01:00:00 BST)
   249       datadir: /Users/krish/fromscratch/new-node-1
   250       modules: admin:1.0 debug:1.0 eth:1.0 ethash:1.0 miner:1.0 net:1.0 personal:1.0 raft:1.0 rpc:1.0 txpool:1.0 web3:1.0
   251      
   252      > raft.cluster
   253      [{
   254          ip: "127.0.0.1",
   255          nodeId: "56e81550db3ccbfb5eb69c0cfe3f4a7135c931a1bae79ea69a1a1c6092cdcbea4c76a556c3af977756f95d8bf9d7b38ab50ae070da390d3abb3d7e773099c1a9",
   256          p2pPort: 21001,
   257          raftId: 2,
   258          raftPort: 50001
   259      }, {
   260          ip: "127.0.0.1",
   261          nodeId: "a5596803caebdc9c5326e1a0775563ad8e4aa14aa3530f0ae16d3fd8d7e48bc0b81342064e22094ab5d10303ab5721650af561f2bcdc54d705f8b6a8c07c94c3",
   262          p2pPort: 21000,
   263          raftId: 1,
   264          raftPort: 50000
   265      }]
   266      > 
   267      > raft.removePeer(2)
   268      null
   269      > raft.cluster
   270      [{
   271          ip: "127.0.0.1",
   272          nodeId: "a5596803caebdc9c5326e1a0775563ad8e4aa14aa3530f0ae16d3fd8d7e48bc0b81342064e22094ab5d10303ab5721650af561f2bcdc54d705f8b6a8c07c94c3",
   273          p2pPort: 21000,
   274          raftId: 1,
   275          raftPort: 50000
   276      }]
   277      > exit
   278      $
   279      $
   280      $ ps | grep geth
   281        PID TTY           TIME CMD
   282      10554 ttys000    0:00.01 -bash
   283       9125 ttys002    0:00.50 -bash
   284      10695 ttys002    0:31.42 geth --datadir new-node-1 --nodiscover --verbosity 5 --networkid 31337 --raft --raftport 50000 --rpc --rpcaddr 0.0.0.0 --rpcport 22000 --rpcapi admin,db,eth,debug,miner,net,shh,txpoo
   285      10750 ttys002    0:01.94 geth --datadir new-node-2 --nodiscover --verbosity 5 --networkid 31337 --raft --raftport 50001 --raftjoinexisting 2 --rpc --rpcaddr 0.0.0.0 --rpcport 22001 --rpcapi admin,db,eth,debu
   286      $ kill 10750
   287      $
   288      $
   289      $ ps
   290        PID TTY           TIME CMD
   291      10554 ttys000    0:00.01 -bash
   292       9125 ttys002    0:00.51 -bash
   293      10695 ttys002    0:31.76 geth --datadir new-node-1 --nodiscover --verbosity 5 --networkid 31337 --raft --raftport 50000 --rpc --rpcaddr 0.0.0.0 --rpcport 22000 --rpcapi admin,db,eth,debug,miner,net,shh,txpoo
   294      ```
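
    Rather than scanning the `ps` output by eye, a node's PID can be matched on its unique `--datadir`. A sketch, demonstrated on a sample `ps` line since no node is running here:

    ```shell
    # With nodes running, pgrep can find the PID by full command line, e.g.:
    #   kill "$(pgrep -f 'geth --datadir new-node-2')"
    # The PID is the first field of a ps line; extract it from a sample line.
    LINE='10750 ttys002    0:01.94 geth --datadir new-node-2 --nodiscover'
    PID=${LINE%% *}   # strip everything after the first space
    echo "$PID"
    ```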
   295  
   296  
   297  
   298  ## Quorum with Istanbul BFT consensus
   299  1. On each machine, build Quorum as described in the [Installing](../Installing) section. Ensure that `PATH` contains `geth` and `bootnode`
   300      ```
   301      $ git clone https://github.com/jpmorganchase/quorum.git
   302      $ cd quorum
   303      $ make all
   304      $ export PATH=$(pwd)/build/bin:$PATH
   305      ```
   306  2. Install [istanbul-tools](https://github.com/jpmorganchase/istanbul-tools)
   307      ```
   308      $ mkdir fromscratchistanbul
   309      $ cd fromscratchistanbul
   310      $ git clone https://github.com/jpmorganchase/istanbul-tools.git
   311      $ cd istanbul-tools
   312      $ make
   313      ```    
   314  3. Create a working directory for each of the X initial validator nodes
   315      ```
   316      $ mkdir node0 node1 node2 node3 node4
   317      ```
   318  4. Change into the lead node's working directory (whichever one you consider first) and generate the setup files for X initial validator nodes by executing `istanbul setup --num X --nodes --quorum --save --verbose`. **Only execute this instruction once, i.e. not X times.** This command generates several items of interest: `static-nodes.json`, `genesis.json`, and the nodekeys for all the initial validator nodes, which sit in numbered directories from 0 to X-1
   319      ```
   320      $ cd node0
   321      $ ../istanbul-tools/build/bin/istanbul setup --num 5 --nodes --quorum --save --verbose
   322      validators
   323      {
   324      	"Address": "0x4c1ccd426833b9782729a212c857f2f03b7b4c0d",
   325      	"Nodekey": "fe2725c4e8f7617764b845e8d939a65c664e7956eb47ed7d934573f16488efc1",
   326      	"NodeInfo": "enode://dd333ec28f0a8910c92eb4d336461eea1c20803eed9cf2c056557f986e720f8e693605bba2f4e8f289b1162e5ac7c80c914c7178130711e393ca76abc1d92f57@0.0.0.0:30303?discport=0"
   327      }
   328      {
   329      	"Address": "0x189d23d201b03ae1cf9113672df29a5d672aefa3",
   330      	"Nodekey": "3434f9efd184f2255f8acc9f4408a5068bd5ae920548044087578ab97ef22f3a",
   331      	"NodeInfo": "enode://1bb6be462f27e56f901c3fcb2d53a9273565f48e5d354c08f0c044405b29291b405b9f5aa027f3a75f9b058cb43e2f54719f15316979a0e5a2b760fff4631998@0.0.0.0:30303?discport=0"
   332      }
   333      {
   334      	"Address": "0x44b07d2c28b8ed8f02b45bd84ac7d9051b3349e6",
   335      	"Nodekey": "8183051c9976200d245c59a80ae004f20c3f66e1aa1b8f17458931de91576e05",
   336      	"NodeInfo": "enode://0df02e94a3befc0683780d898119d3b675e5942c1a2f9ad47d35b4e6ccaf395cd71ec089fcf1d616748bf9871f91e5e3d29c1cf6f8f81de1b279082a104f619d@0.0.0.0:30303?discport=0"
   337      }
   338      {
   339      	"Address": "0xc1056df7c02b6f1a353052eaf0533cc7cb743b52",
   340      	"Nodekey": "de415c5dbbb9ff0a34dbd3bf871ee41b230f431925e1f4cc1dd225ef47cc066f",
   341      	"NodeInfo": "enode://3fe0ff0dd2730eaac7b6b379bdb51215b5831f4f48fa54a24a0298ad5ba8c2a332442948d53f4cd4fd28f373089a35e806ef722eb045659910f96a1278120516@0.0.0.0:30303?discport=0"
   342      }
   343      {
   344      	"Address": "0x7ae555d0f6faad7930434abdaac2274fd86ab516",
   345      	"Nodekey": "768b87473ba96fcfa272f958fc95a3cefdf9aa82110cde6f2f34aa5855eb39db",
   346      	"NodeInfo": "enode://e53e92e5a51ac2685b0406d0d3c62288b53831c3b0f492b9dc4bc40334783702cfa74c49b836efa2761edde33a3282704273b2453537b855e7a4aeadcccdb43e@0.0.0.0:30303?discport=0"
   347      }
   348      
   349          
   350      static-nodes.json
   351      [
   352      	"enode://dd333ec28f0a8910c92eb4d336461eea1c20803eed9cf2c056557f986e720f8e693605bba2f4e8f289b1162e5ac7c80c914c7178130711e393ca76abc1d92f57@0.0.0.0:30303?discport=0",
   353      	"enode://1bb6be462f27e56f901c3fcb2d53a9273565f48e5d354c08f0c044405b29291b405b9f5aa027f3a75f9b058cb43e2f54719f15316979a0e5a2b760fff4631998@0.0.0.0:30303?discport=0",
   354      	"enode://0df02e94a3befc0683780d898119d3b675e5942c1a2f9ad47d35b4e6ccaf395cd71ec089fcf1d616748bf9871f91e5e3d29c1cf6f8f81de1b279082a104f619d@0.0.0.0:30303?discport=0",
   355      	"enode://3fe0ff0dd2730eaac7b6b379bdb51215b5831f4f48fa54a24a0298ad5ba8c2a332442948d53f4cd4fd28f373089a35e806ef722eb045659910f96a1278120516@0.0.0.0:30303?discport=0",
   356      	"enode://e53e92e5a51ac2685b0406d0d3c62288b53831c3b0f492b9dc4bc40334783702cfa74c49b836efa2761edde33a3282704273b2453537b855e7a4aeadcccdb43e@0.0.0.0:30303?discport=0"
   357      ]
   358      
   359      
   360      
   361      genesis.json
   362      {
   363          "config": {
   364              "chainId": 10,
   365              "eip150Block": 1,
   366              "eip150Hash": "0x0000000000000000000000000000000000000000000000000000000000000000",
   367              "eip155Block": 1,
   368              "eip158Block": 1,
   369              "byzantiumBlock": 1,
   370              "istanbul": {
   371                  "epoch": 30000,
   372                  "policy": 0
   373              },
   374              "isQuorum": true
   375          },
   376          "nonce": "0x0",
   377          "timestamp": "0x5cffc201",
   378          "extraData": "0x0000000000000000000000000000000000000000000000000000000000000000f8aff869944c1ccd426833b9782729a212c857f2f03b7b4c0d94189d23d201b03ae1cf9113672df29a5d672aefa39444b07d2c28b8ed8f02b45bd84ac7d9051b3349e694c1056df7c02b6f1a353052eaf0533cc7cb743b52947ae555d0f6faad7930434abdaac2274fd86ab516b8410000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000c0",
   379          "gasLimit": "0xe0000000",
   380          "difficulty": "0x1",
   381          "mixHash": "0x63746963616c2062797a616e74696e65206661756c7420746f6c6572616e6365",
   382          "coinbase": "0x0000000000000000000000000000000000000000",
   383          "alloc": {
   384              "189d23d201b03ae1cf9113672df29a5d672aefa3": {
   385                  "balance": "0x446c3b15f9926687d2c40534fdb564000000000000"
   386              },
   387              "44b07d2c28b8ed8f02b45bd84ac7d9051b3349e6": {
   388                  "balance": "0x446c3b15f9926687d2c40534fdb564000000000000"
   389              },
   390              "4c1ccd426833b9782729a212c857f2f03b7b4c0d": {
   391                  "balance": "0x446c3b15f9926687d2c40534fdb564000000000000"
   392              },
   393              "7ae555d0f6faad7930434abdaac2274fd86ab516": {
   394                  "balance": "0x446c3b15f9926687d2c40534fdb564000000000000"
   395              },
   396              "c1056df7c02b6f1a353052eaf0533cc7cb743b52": {
   397                  "balance": "0x446c3b15f9926687d2c40534fdb564000000000000"
   398              }
   399          },
   400          "number": "0x0",
   401          "gasUsed": "0x0",
   402          "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
   403      }
   404  
   405      $ ls -l
   406       total 16
   407       drwxr-xr-x  9 krish  staff   288 11 Jun 16:00 .
   408       drwxr-xr-x  8 krish  staff   256 11 Jun 15:58 ..
   409       drwxr-xr-x  3 krish  staff    96 11 Jun 16:00 0
   410       drwxr-xr-x  3 krish  staff    96 11 Jun 16:00 1
   411       drwxr-xr-x  3 krish  staff    96 11 Jun 16:00 2
   412       drwxr-xr-x  3 krish  staff    96 11 Jun 16:00 3
   413       drwxr-xr-x  3 krish  staff    96 11 Jun 16:00 4
   414       -rwxr-xr-x  1 krish  staff  1878 11 Jun 16:00 genesis.json
   415       -rwxr-xr-x  1 krish  staff   832 11 Jun 16:00 static-nodes.json
   416      ```
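
    The long `extraData` value in the generated `genesis.json` is where Istanbul records the initial validator set: it embeds the RLP-encoded list of the five validator addresses printed above. A quick check that each address appears inside it (using the values from this example run):

    ```shell
    # Each 20-byte validator address should occur verbatim inside the
    # RLP-encoded validator list embedded in extraData.
    EXTRA='f8aff869944c1ccd426833b9782729a212c857f2f03b7b4c0d94189d23d201b03ae1cf9113672df29a5d672aefa39444b07d2c28b8ed8f02b45bd84ac7d9051b3349e694c1056df7c02b6f1a353052eaf0533cc7cb743b52947ae555d0f6faad7930434abdaac2274fd86ab516'
    FOUND=0
    for ADDR in 4c1ccd426833b9782729a212c857f2f03b7b4c0d \
                189d23d201b03ae1cf9113672df29a5d672aefa3 \
                44b07d2c28b8ed8f02b45bd84ac7d9051b3349e6 \
                c1056df7c02b6f1a353052eaf0533cc7cb743b52 \
                7ae555d0f6faad7930434abdaac2274fd86ab516; do
      case "$EXTRA" in *"$ADDR"*) FOUND=$((FOUND + 1)) ;; esac
    done
    echo "$FOUND of 5 validator addresses found"
    ```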
   417      
   418  5. Update `static-nodes.json` to include the intended IP and port numbers of all initial validator nodes. In `static-nodes.json`, you will see a different row for each node. For the rest of the installation guide, row Y refers to node Y and row 1 is assumed to correspond to the lead node
   419      ```
   420      $ cat static-nodes.json
   421      .... update the IP and port numbers as given below...
   422      [
   423      	"enode://dd333ec28f0a8910c92eb4d336461eea1c20803eed9cf2c056557f986e720f8e693605bba2f4e8f289b1162e5ac7c80c914c7178130711e393ca76abc1d92f57@127.0.0.1:30300?discport=0",
   424      	"enode://1bb6be462f27e56f901c3fcb2d53a9273565f48e5d354c08f0c044405b29291b405b9f5aa027f3a75f9b058cb43e2f54719f15316979a0e5a2b760fff4631998@127.0.0.1:30301?discport=0",
   425      	"enode://0df02e94a3befc0683780d898119d3b675e5942c1a2f9ad47d35b4e6ccaf395cd71ec089fcf1d616748bf9871f91e5e3d29c1cf6f8f81de1b279082a104f619d@127.0.0.1:30302?discport=0",
   426      	"enode://3fe0ff0dd2730eaac7b6b379bdb51215b5831f4f48fa54a24a0298ad5ba8c2a332442948d53f4cd4fd28f373089a35e806ef722eb045659910f96a1278120516@127.0.0.1:30303?discport=0",
   427      	"enode://e53e92e5a51ac2685b0406d0d3c62288b53831c3b0f492b9dc4bc40334783702cfa74c49b836efa2761edde33a3282704273b2453537b855e7a4aeadcccdb43e@127.0.0.1:30304?discport=0"
   428      ]
   429      ```
   430  6. In each node's working directory, create a data directory called `data`, and inside `data` create the `geth` directory
   431      ```
   432      $ cd ..
   433      $ mkdir -p node0/data/geth
   434      $ mkdir -p node1/data/geth
   435      $ mkdir -p node2/data/geth
   436      $ mkdir -p node3/data/geth
   437      $ mkdir -p node4/data/geth
   438      ```
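
    The same layout can be created in a single loop:

    ```shell
    # Create data/geth under each of the five node directories.
    for i in 0 1 2 3 4; do
      mkdir -p "node$i/data/geth"
    done
    ls -d node*/data/geth
    ```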
   439  7. Now generate the initial accounts for any of the nodes, running the command in the relevant node's working directory. Record the public account address printed in the terminal. Repeat as many times as necessary. A set of funded accounts may be required, depending on what you are trying to accomplish
   440      ```
   441      $ geth --datadir node0/data account new
   442      INFO [06-11|16:05:53.672] Maximum peer count                       ETH=25 LES=0 total=25
   443      Your new account is locked with a password. Please give a password. Do not forget this password.
   444      Passphrase: 
   445      Repeat passphrase: 
   446      Address: {8fc817d90f179b0b627c2ecbcc1d1b0fcd13ddbd}
   447      $ geth --datadir node1/data account new
   448      INFO [06-11|16:06:34.529] Maximum peer count                       ETH=25 LES=0 total=25
   449      Your new account is locked with a password. Please give a password. Do not forget this password.
   450      Passphrase: 
   451      Repeat passphrase: 
   452      Address: {dce8adeef16a45d94be5e7804df6d35db834d94a}
   453      $ geth --datadir node2/data account new
   454      INFO [06-11|16:06:54.365] Maximum peer count                       ETH=25 LES=0 total=25
   455      Your new account is locked with a password. Please give a password. Do not forget this password.
   456      Passphrase: 
   457      Repeat passphrase: 
   458      Address: {65a3ab6d4cf23395f544833831fc5d42a0b2b43a}
   459      ```
   460  8. To add accounts to the initial block, edit the `genesis.json` file in the lead node's working directory and update the `alloc` field with the account(s) generated in the previous step
   461      ```
   462      $ vim node0/genesis.json
   463      .... update the accounts under 'alloc'
   464      {
   465          "config": {
   466              "chainId": 10,
   467              "eip150Block": 1,
   468              "eip150Hash": "0x0000000000000000000000000000000000000000000000000000000000000000",
   469              "eip155Block": 1,
   470              "eip158Block": 1,
   471              "byzantiumBlock": 1,
   472              "istanbul": {
   473                  "epoch": 30000,
   474                  "policy": 0
   475              },
   476              "isQuorum": true
   477          },
   478          "nonce": "0x0",
   479          "timestamp": "0x5cffc201",
   480          "extraData": "0x0000000000000000000000000000000000000000000000000000000000000000f8aff869944c1ccd426833b9782729a212c857f2f03b7b4c0d94189d23d201b03ae1cf9113672df29a5d672aefa39444b07d2c28b8ed8f02b45bd84ac7d9051b3349e694c1056df7c02b6f1a353052eaf0533cc7cb743b52947ae555d0f6faad7930434abdaac2274fd86ab516b8410000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000c0",
   481          "gasLimit": "0xe0000000",
   482          "difficulty": "0x1",
   483          "mixHash": "0x63746963616c2062797a616e74696e65206661756c7420746f6c6572616e6365",
   484          "coinbase": "0x0000000000000000000000000000000000000000",
   485          "alloc": {
   486              "8fc817d90f179b0b627c2ecbcc1d1b0fcd13ddbd": {
   487                  "balance": "0x446c3b15f9926687d2c40534fdb564000000000000"
   488              },
   489              "dce8adeef16a45d94be5e7804df6d35db834d94a": {
   490                  "balance": "0x446c3b15f9926687d2c40534fdb564000000000000"
   491              },
   492              "65a3ab6d4cf23395f544833831fc5d42a0b2b43a": {
   493                  "balance": "0x446c3b15f9926687d2c40534fdb564000000000000"
   494              }
   495          },
   496          "number": "0x0",
   497          "gasUsed": "0x0",
   498          "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
   499      }
   500      ```
   501  9. Next we need to distribute the files created in part 4, which currently reside in the lead node's working directory, to all other nodes. Place `genesis.json` in the working directory of all nodes, place `static-nodes.json` in the `data` folder of each node, and place `Y/nodekey` in node Y's `data/geth` directory (for example, `node0/0/nodekey` goes to `node0/data/geth`, `node0/1/nodekey` to `node1/data/geth`, and so on)
   502      ```
   503      $ cp node0/genesis.json node1
   504      $ cp node0/genesis.json node2
   505      $ cp node0/genesis.json node3
   506      $ cp node0/genesis.json node4
   507      $ cp node0/static-nodes.json node0/data/
   508      $ cp node0/static-nodes.json node1/data/
   509      $ cp node0/static-nodes.json node2/data/
   510      $ cp node0/static-nodes.json node3/data/
   511      $ cp node0/static-nodes.json node4/data/
   512      $ cp node0/0/nodekey node0/data/geth
   513      $ cp node0/1/nodekey node1/data/geth
   514      $ cp node0/2/nodekey node2/data/geth
   515      $ cp node0/3/nodekey node3/data/geth
   516      $ cp node0/4/nodekey node4/data/geth
   517      ```
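
    The copies follow a regular pattern, with the numbered key directories mapping one-to-one onto the node directories, so the distribution can be scripted. A dry run that only prints the commands (drop the `echo` to actually copy):

    ```shell
    # Print the same distribution as above; directory $i's nodekey
    # belongs to node $i.
    for i in 0 1 2 3 4; do
      echo cp node0/genesis.json "node$i/"
      echo cp node0/static-nodes.json "node$i/data/"
      echo cp "node0/$i/nodekey" "node$i/data/geth/"
    done
    ```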
   518  10. Switch into the lead node's working directory and initialize it, then repeat for every other node's working directory created in step 3. *The resulting hash given by executing `geth init` must match for every node*
   519      ```
   520      $ cd node0
   521      $ geth --datadir data init genesis.json
   522      INFO [06-11|16:14:11.883] Maximum peer count                       ETH=25 LES=0 total=25
   523      INFO [06-11|16:14:11.894] Allocated cache and file handles         database=/Users/krish/fromscratchistanbul/node0/data/geth/chaindata cache=16 handles=16
   524      INFO [06-11|16:14:11.896] Writing custom genesis block 
   525      INFO [06-11|16:14:11.897] Persisted trie from memory database      nodes=6 size=1.01kB time=76.665µs gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
   526      INFO [06-11|16:14:11.897] Successfully wrote genesis state         database=chaindata                                                  hash=b992be…533db7
   527      INFO [06-11|16:14:11.897] Allocated cache and file handles         database=/Users/krish/fromscratchistanbul/node0/data/geth/lightchaindata cache=16 handles=16
   528      INFO [06-11|16:14:11.898] Writing custom genesis block 
   529      INFO [06-11|16:14:11.898] Persisted trie from memory database      nodes=6 size=1.01kB time=54.929µs gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
   530      INFO [06-11|16:14:11.898] Successfully wrote genesis state         database=lightchaindata                                                  hash=b992be…533db7
   531      $
   532      $ cd ..
   533      $ cd node1
   534      $ geth --datadir data init genesis.json
   535      INFO [06-11|16:14:24.814] Maximum peer count                       ETH=25 LES=0 total=25
   536      INFO [06-11|16:14:24.824] Allocated cache and file handles         database=/Users/krish/fromscratchistanbul/node1/data/geth/chaindata cache=16 handles=16
   537      INFO [06-11|16:14:24.831] Writing custom genesis block 
   538      INFO [06-11|16:14:24.831] Persisted trie from memory database      nodes=6 size=1.01kB time=82.799µs gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
   539      INFO [06-11|16:14:24.832] Successfully wrote genesis state         database=chaindata                                                  hash=b992be…533db7
   540      INFO [06-11|16:14:24.832] Allocated cache and file handles         database=/Users/krish/fromscratchistanbul/node1/data/geth/lightchaindata cache=16 handles=16
   541      INFO [06-11|16:14:24.833] Writing custom genesis block 
   542      INFO [06-11|16:14:24.833] Persisted trie from memory database      nodes=6 size=1.01kB time=52.828µs gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
   543      INFO [06-11|16:14:24.834] Successfully wrote genesis state         database=lightchaindata                                                  hash=b992be…533db7
   544      $
   545      $ cd ..
   546      $ cd node2
   547      $ geth --datadir data init genesis.json
   548      INFO [06-11|16:14:35.246] Maximum peer count                       ETH=25 LES=0 total=25
   549      INFO [06-11|16:14:35.257] Allocated cache and file handles         database=/Users/krish/fromscratchistanbul/node2/data/geth/chaindata cache=16 handles=16
   550      INFO [06-11|16:14:35.264] Writing custom genesis block 
   551      INFO [06-11|16:14:35.265] Persisted trie from memory database      nodes=6 size=1.01kB time=124.91µs gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
   552      INFO [06-11|16:14:35.265] Successfully wrote genesis state         database=chaindata                                                  hash=b992be…533db7
   553      INFO [06-11|16:14:35.265] Allocated cache and file handles         database=/Users/krish/fromscratchistanbul/node2/data/geth/lightchaindata cache=16 handles=16
   554      INFO [06-11|16:14:35.267] Writing custom genesis block 
   555      INFO [06-11|16:14:35.268] Persisted trie from memory database      nodes=6 size=1.01kB time=85.504µs gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
   556      INFO [06-11|16:14:35.268] Successfully wrote genesis state         database=lightchaindata                                                  hash=b992be…533db7
   557      $ cd ../node3
   558      $ geth --datadir data init genesis.json
   559      INFO [06-11|16:14:42.168] Maximum peer count                       ETH=25 LES=0 total=25
   560      INFO [06-11|16:14:42.178] Allocated cache and file handles         database=/Users/krish/fromscratchistanbul/node3/data/geth/chaindata cache=16 handles=16
   561      INFO [06-11|16:14:42.186] Writing custom genesis block 
   562      INFO [06-11|16:14:42.186] Persisted trie from memory database      nodes=6 size=1.01kB time=124.611µs gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
   563      INFO [06-11|16:14:42.187] Successfully wrote genesis state         database=chaindata                                                  hash=b992be…533db7
   564      INFO [06-11|16:14:42.187] Allocated cache and file handles         database=/Users/krish/fromscratchistanbul/node3/data/geth/lightchaindata cache=16 handles=16
   565      INFO [06-11|16:14:42.189] Writing custom genesis block 
   566      INFO [06-11|16:14:42.189] Persisted trie from memory database      nodes=6 size=1.01kB time=80.973µs  gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
   567      INFO [06-11|16:14:42.189] Successfully wrote genesis state         database=lightchaindata                                                  hash=b992be…533db7
   568      $ cd ../node4
   569      $ geth --datadir data init genesis.json
   570      INFO [06-11|16:14:48.737] Maximum peer count                       ETH=25 LES=0 total=25
   571      INFO [06-11|16:14:48.747] Allocated cache and file handles         database=/Users/krish/fromscratchistanbul/node4/data/geth/chaindata cache=16 handles=16
   572      INFO [06-11|16:14:48.749] Writing custom genesis block 
   573      INFO [06-11|16:14:48.749] Persisted trie from memory database      nodes=6 size=1.01kB time=71.213µs gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
   574      INFO [06-11|16:14:48.750] Successfully wrote genesis state         database=chaindata                                                  hash=b992be…533db7
   575      INFO [06-11|16:14:48.750] Allocated cache and file handles         database=/Users/krish/fromscratchistanbul/node4/data/geth/lightchaindata cache=16 handles=16
   576      INFO [06-11|16:14:48.751] Writing custom genesis block 
   577      INFO [06-11|16:14:48.751] Persisted trie from memory database      nodes=6 size=1.01kB time=53.773µs gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
   578      INFO [06-11|16:14:48.751] Successfully wrote genesis state         database=lightchaindata                                                  hash=b992be…533db7
   579      $ cd ..
   580      ```
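The repeated `cd`/`geth init` sequence above can be generated as a loop. This sketch only prints the five commands for review (nothing is executed against geth here); pipe the output to `sh`, or drop the `echo`, to actually initialize each node.

```shell
# Print one init command per working directory created in step 3
cmds=$(for i in 0 1 2 3 4; do
  echo "(cd node$i && geth --datadir data init genesis.json)"
done)
printf '%s\n' "$cmds"
```

When you do run them, confirm that the `Successfully wrote genesis state ... hash=` line reports the same hash on every node.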
   581  
   582  11. Start all nodes by first creating a script and running it.
   583      ```
   584      $ vim startall.sh
   585      .... paste below; the port numbers must match those decided on for each node in step 5
   586      #!/bin/bash
   587      cd node0
   588      PRIVATE_CONFIG=ignore nohup geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22000 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul --emitcheckpoints --port 30300 2>>node.log &
   589      
   590      
   591      cd ../node1
   592      PRIVATE_CONFIG=ignore nohup geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22001 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul --emitcheckpoints --port 30301 2>>node.log &
   593      
   594      cd ../node2
   595      PRIVATE_CONFIG=ignore nohup geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22002 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul --emitcheckpoints --port 30302 2>>node.log &
   596       
   597      cd ../node3
   598      PRIVATE_CONFIG=ignore nohup geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22003 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul --emitcheckpoints --port 30303 2>>node.log &
   599      
   600      cd ../node4
   601      PRIVATE_CONFIG=ignore nohup geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22004 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul --emitcheckpoints --port 30304 2>>node.log &
   602      $
   603      See if any geth nodes are already running
   604      $ ps | grep geth
   605      Kill any existing geth processes
   606      $ killall -INT geth
   607      $
   608      $ chmod +x startall.sh
   609      $ ./startall.sh
   610      $ ps
   611        PID TTY           TIME CMD
   612      10554 ttys000    0:00.11 -bash
   613      21829 ttys001    0:00.03 -bash
   614       9125 ttys002    0:00.82 -bash
   615      36432 ttys002    0:00.19 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22000 --rpcapi admin,
   616      36433 ttys002    0:00.18 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22001 --rpcapi admin,
   617      36434 ttys002    0:00.19 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22002 --rpcapi admin,
   618      36435 ttys002    0:00.19 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22003 --rpcapi admin,
   619      36436 ttys002    0:00.19 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22004 --rpcapi admin,
   620      $ 
   621      ```
   622      
   623      !!! note
   624          This configuration starts Quorum without privacy support, as indicated by the `PRIVATE_CONFIG=ignore` prefix; see the section below on [how to enable privacy with privacy transaction managers](#adding-privacy-transaction-manager).
   625          Note that istanbul-tools can generate any number of nodes; more information is available in the [docs](https://github.com/jpmorganchase/istanbul-tools).
   626  
   627      Your nodes are now operational and you may attach to them with `geth attach node0/data/geth.ipc`. 
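Since the five entries in `startall.sh` differ only in directory, RPC port (22000 + node number) and P2P port (30300 + node number), the script can also be generated rather than hand-edited. A sketch, writing to a temporary file so it can be inspected before use:

```shell
out=$(mktemp)
{
  echo '#!/bin/bash'
  for i in 0 1 2 3 4; do
    printf 'cd node%s && PRIVATE_CONFIG=ignore nohup geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport %s --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul --emitcheckpoints --port %s 2>>node.log & cd ..\n' \
      "$i" "$((22000 + i))" "$((30300 + i))"
  done
} > "$out"
grep -o 'rpcport [0-9]*' "$out"   # one line per node: 22000..22004
```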
   628      
   629  ### Adding additional validator
   630  1. Create a working directory for the new node that needs to be added
   631      ```
   632      $ mkdir node5
   633      ```
   634  2. Change into the working directory for the new node and run `istanbul setup --num 1 --verbose --quorum --save`. This will generate the validator details (Address, Nodekey and NodeInfo) along with a genesis.json
   635      ```
   636      $ cd node5
   637      $ ../istanbul-tools/build/bin/istanbul setup --num 1 --verbose --quorum --save
   638      validators
   639      {
   640      	"Address": "0x2aabbc1bb9bacef60a09764d1a1f4f04a47885c1",
   641      	"Nodekey": "25b47a49ef08f888c04f30417363e6c6bc33e739147b2f8b5377b3168f9f7435",
   642      	"NodeInfo": "enode://273eaf48591ce0e77c800b3e6465811d6d2f924c4dcaae016c2c7375256d17876c3e05f91839b741fe12350da0b5a741da4e30f39553fe8790f88503c64f6ef9@0.0.0.0:30303?discport=0"
   643      }
   644      
   645      
   646      
   647      genesis.json
   648      {
   649          "config": {
   650              "chainId": 10,
   651              "eip150Block": 1,
   652              "eip150Hash": "0x0000000000000000000000000000000000000000000000000000000000000000",
   653              "eip155Block": 1,
   654              "eip158Block": 1,
   655              "byzantiumBlock": 1,
   656              "istanbul": {
   657                  "epoch": 30000,
   658                  "policy": 0
   659              },
   660              "isQuorum": true
   661          },
   662          "nonce": "0x0",
   663          "timestamp": "0x5cffc942",
   664          "extraData": "0x0000000000000000000000000000000000000000000000000000000000000000f85ad5942aabbc1bb9bacef60a09764d1a1f4f04a47885c1b8410000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000c0",
   665          "gasLimit": "0xe0000000",
   666          "difficulty": "0x1",
   667          "mixHash": "0x63746963616c2062797a616e74696e65206661756c7420746f6c6572616e6365",
   668          "coinbase": "0x0000000000000000000000000000000000000000",
   669          "alloc": {
   670              "2aabbc1bb9bacef60a09764d1a1f4f04a47885c1": {
   671                  "balance": "0x446c3b15f9926687d2c40534fdb564000000000000"
   672              }
   673          },
   674          "number": "0x0",
   675          "gasUsed": "0x0",
   676          "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
   677      }
   678      ```
   679      
   680  3. Copy the address of the validator and run `istanbul.propose(<address>, true)` from more than half of the current validators.
   681      ```
   682      $ cd ..
   683      $ geth attach node0/data/geth.ipc
   685      Welcome to the Geth JavaScript console!
   686      
   687      instance: Geth/v1.8.18-stable-bb88608c(quorum-v2.2.3)/darwin-amd64/go1.10.2
   688      coinbase: 0x4c1ccd426833b9782729a212c857f2f03b7b4c0d
   689      at block: 137 (Tue, 11 Jun 2019 16:32:47 BST)
   690       datadir: /Users/krish/fromscratchistanbul/node0/data
   691       modules: admin:1.0 debug:1.0 eth:1.0 istanbul:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
   692      
   693      > istanbul.getValidators()
   694      ["0x189d23d201b03ae1cf9113672df29a5d672aefa3", "0x44b07d2c28b8ed8f02b45bd84ac7d9051b3349e6", "0x4c1ccd426833b9782729a212c857f2f03b7b4c0d", "0x7ae555d0f6faad7930434abdaac2274fd86ab516", "0xc1056df7c02b6f1a353052eaf0533cc7cb743b52"]
   695      > istanbul.propose("0x2aabbc1bb9bacef60a09764d1a1f4f04a47885c1",true)
   696      null
   697      $
   698      $
   699      $ geth attach node1/data/geth.ipc
   700      Welcome to the Geth JavaScript console!
   701      
   702      instance: Geth/v1.8.18-stable-bb88608c(quorum-v2.2.3)/darwin-amd64/go1.10.2
   703      coinbase: 0x189d23d201b03ae1cf9113672df29a5d672aefa3
   704      at block: 176 (Tue, 11 Jun 2019 16:36:02 BST)
   705       datadir: /Users/krish/fromscratchistanbul/node1/data
   706       modules: admin:1.0 debug:1.0 eth:1.0 istanbul:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
   707      
   708      > istanbul.propose("0x2aabbc1bb9bacef60a09764d1a1f4f04a47885c1",true)
   709      null
   710      $
   711      $
   712      $ geth attach node2/data/geth.ipc
   713      Welcome to the Geth JavaScript console!
   714      
   715      instance: Geth/v1.8.18-stable-bb88608c(quorum-v2.2.3)/darwin-amd64/go1.10.2
   716      coinbase: 0x44b07d2c28b8ed8f02b45bd84ac7d9051b3349e6
   717      at block: 179 (Tue, 11 Jun 2019 16:36:17 BST)
   718       datadir: /Users/krish/fromscratchistanbul/node2/data
   719       modules: admin:1.0 debug:1.0 eth:1.0 istanbul:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
   720      
   721      > istanbul.propose("0x2aabbc1bb9bacef60a09764d1a1f4f04a47885c1",true)
   722      null
   723      $
   724      $
   725      $ geth attach node3/data/geth.ipc
   726      Welcome to the Geth JavaScript console!
   727      
   728      instance: Geth/v1.8.18-stable-bb88608c(quorum-v2.2.3)/darwin-amd64/go1.10.2
   729      coinbase: 0xc1056df7c02b6f1a353052eaf0533cc7cb743b52
   730      at block: 181 (Tue, 11 Jun 2019 16:36:27 BST)
   731       datadir: /Users/krish/fromscratchistanbul/node3/data
   732       modules: admin:1.0 debug:1.0 eth:1.0 istanbul:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
   733      
   734      > istanbul.propose("0x2aabbc1bb9bacef60a09764d1a1f4f04a47885c1",true)
   735      ```
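"More than half" here means a strict majority of the current validator set. With the 5 existing validators, 3 matching proposals suffice; the walkthrough above issues 4, which also works. The arithmetic:

```shell
validators=5
needed=$(( validators / 2 + 1 ))   # strict majority: floor(n/2) + 1
echo "$needed proposals required"
```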
   736      
   737  4. Verify that the new validator has been added to the list of validators by running `istanbul.getValidators()`
   738      ```
   739      ... the command below now displays 6 node addresses as validators.
   740      > istanbul.getValidators()
   741      ["0x189d23d201b03ae1cf9113672df29a5d672aefa3", "0x2aabbc1bb9bacef60a09764d1a1f4f04a47885c1", "0x44b07d2c28b8ed8f02b45bd84ac7d9051b3349e6", "0x4c1ccd426833b9782729a212c857f2f03b7b4c0d", "0x7ae555d0f6faad7930434abdaac2274fd86ab516", "0xc1056df7c02b6f1a353052eaf0533cc7cb743b52"]    
   742      ```
   743  5. Copy `static-nodes.json` and `genesis.json` from the existing chain. `static-nodes.json` should be placed into the new node's data dir
   744      ```
   745      $ cd node5
   746      $ mkdir -p data/geth
   747      $ cp ../node0/static-nodes.json data
   748      $ cp ../node0/genesis.json .
   749      ```
   750  6. Edit `static-nodes.json` and add the new validator's node info to the end of the file. The node info can be obtained from the output of the `istanbul setup --num 1 --verbose --quorum --save` command run in step 2. Update its IP address and port to match the validator's IP address and the port you want to use.
   751      ```
   752      $ vim data/static-nodes.json
   753      ...add the new validator's node details with the correct IP and port
   754      [
   755      	"enode://dd333ec28f0a8910c92eb4d336461eea1c20803eed9cf2c056557f986e720f8e693605bba2f4e8f289b1162e5ac7c80c914c7178130711e393ca76abc1d92f57@127.0.0.1:30300?discport=0",
   756      	"enode://1bb6be462f27e56f901c3fcb2d53a9273565f48e5d354c08f0c044405b29291b405b9f5aa027f3a75f9b058cb43e2f54719f15316979a0e5a2b760fff4631998@127.0.0.1:30301?discport=0",
   757      	"enode://0df02e94a3befc0683780d898119d3b675e5942c1a2f9ad47d35b4e6ccaf395cd71ec089fcf1d616748bf9871f91e5e3d29c1cf6f8f81de1b279082a104f619d@127.0.0.1:30302?discport=0",
   758      	"enode://3fe0ff0dd2730eaac7b6b379bdb51215b5831f4f48fa54a24a0298ad5ba8c2a332442948d53f4cd4fd28f373089a35e806ef722eb045659910f96a1278120516@127.0.0.1:30303?discport=0",
   759      	"enode://e53e92e5a51ac2685b0406d0d3c62288b53831c3b0f492b9dc4bc40334783702cfa74c49b836efa2761edde33a3282704273b2453537b855e7a4aeadcccdb43e@127.0.0.1:30304?discport=0",
   760          "enode://273eaf48591ce0e77c800b3e6465811d6d2f924c4dcaae016c2c7375256d17876c3e05f91839b741fe12350da0b5a741da4e30f39553fe8790f88503c64f6ef9@127.0.0.1:30305?discport=0"
   761      ]
   762  
   763      ```
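After editing, a quick sanity check can catch the most common mistakes — a missing entry or two validators sharing a port. A sketch; a hypothetical two-entry file is written here so the snippet is self-contained, but in practice you would point the commands at `data/static-nodes.json`:

```shell
f=$(mktemp)
cat > "$f" <<'EOF'
[
  "enode://aa11@127.0.0.1:30300?discport=0",
  "enode://bb22@127.0.0.1:30301?discport=0"
]
EOF
count=$(grep -c 'enode://' "$f")                 # number of enode entries
dups=$(grep -o ':[0-9]*?' "$f" | sort | uniq -d) # ports listed more than once
echo "$count entries; duplicate ports: ${dups:-none}"
```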
   764  7. Copy the nodekey that was generated by the `istanbul setup` command to the `data/geth` directory inside the working directory
   765      ```
   766      $ cp 0/nodekey data/geth
   767      ```
   768  8. Generate one or more accounts for this node and take down the account address.
   769      ```
   770      $ geth --datadir data account new
   771      INFO [06-12|17:45:11.116] Maximum peer count                       ETH=25 LES=0 total=25
   772      Your new account is locked with a password. Please give a password. Do not forget this password.
   773      Passphrase: 
   774      Repeat passphrase: 
   775      Address: {37922bce824bca2f3206ea53dd50d173b368b572}
   776      ```
   777  9. Initialize the new node with the below command
   778      ```
   779      $ geth --datadir data init genesis.json
   780      INFO [06-11|16:42:27.120] Maximum peer count                       ETH=25 LES=0 total=25
   781      INFO [06-11|16:42:27.130] Allocated cache and file handles         database=/Users/krish/fromscratchistanbul/node5/data/geth/chaindata cache=16 handles=16
   782      INFO [06-11|16:42:27.138] Writing custom genesis block 
   783      INFO [06-11|16:42:27.138] Persisted trie from memory database      nodes=6 size=1.01kB time=163.024µs gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
   784      INFO [06-11|16:42:27.139] Successfully wrote genesis state         database=chaindata                                                  hash=b992be…533db7
   785      INFO [06-11|16:42:27.139] Allocated cache and file handles         database=/Users/krish/fromscratchistanbul/node5/data/geth/lightchaindata cache=16 handles=16
   786      INFO [06-11|16:42:27.141] Writing custom genesis block 
   787      INFO [06-11|16:42:27.142] Persisted trie from memory database      nodes=6 size=1.01kB time=94.57µs   gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
   788      INFO [06-11|16:42:27.142] Successfully wrote genesis state         database=lightchaindata                                                  hash=b992be…533db7
   789      $
   790      
   791      ```
   792  10. Start the node by creating the below script and executing it:
   793      ```
   794      $ cd ..
   795      $ cp startall.sh start6.sh
   796      $ vim start6.sh
   797      ... paste below and update the IP and port number to match those decided on for this node in step 6
   798      #!/bin/bash
   799      cd node5
   800      PRIVATE_CONFIG=ignore nohup geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22005 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul --emitcheckpoints --port 30305 2>>node.log &
   801      $
   802      $ ./start6.sh 
   803      $
   804      $ ps
   805        PID TTY           TIME CMD
   806      10554 ttys000    0:00.11 -bash
   807      21829 ttys001    0:00.03 -bash
   808       9125 ttys002    0:00.93 -bash
   809      36432 ttys002    0:24.48 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22000 --rpcapi admin,
   810      36433 ttys002    0:23.36 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22001 --rpcapi admin,
   811      36434 ttys002    0:24.32 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22002 --rpcapi admin,
   812      36435 ttys002    0:24.21 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22003 --rpcapi admin,
   813      36436 ttys002    0:24.17 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22004 --rpcapi admin,
   814      36485 ttys002    0:00.15 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22005 --rpcapi admin,
   815      36455 ttys003    0:00.04 -bash
   816      36467 ttys003    0:00.32 geth attach node3/data/geth.ipc
   817      ```
   818  
   819  ### Removing validator
   820  1. Attach to a running validator and run `istanbul.getValidators()` and identify the address of the validator that needs to be removed
   821      ```
   822      $ geth attach node0/data/geth.ipc
   823      Welcome to the Geth JavaScript console!
   824      
   825      instance: Geth/v1.8.18-stable-bb88608c(quorum-v2.2.3)/darwin-amd64/go1.10.2
   826      coinbase: 0xc1056df7c02b6f1a353052eaf0533cc7cb743b52
   827      at block: 181 (Tue, 11 Jun 2019 16:36:27 BST)
   828       datadir: /Users/krish/fromscratchistanbul/node0/data
   829       modules: admin:1.0 debug:1.0 eth:1.0 istanbul:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
   830      > istanbul.getValidators()
   831      ["0x189d23d201b03ae1cf9113672df29a5d672aefa3", "0x2aabbc1bb9bacef60a09764d1a1f4f04a47885c1", "0x44b07d2c28b8ed8f02b45bd84ac7d9051b3349e6", "0x4c1ccd426833b9782729a212c857f2f03b7b4c0d", "0x7ae555d0f6faad7930434abdaac2274fd86ab516", "0xc1056df7c02b6f1a353052eaf0533cc7cb743b52"]
   832      ```
   833  2. Run `istanbul.propose(<address>, false)` from more than half of the current validators, passing the address of the validator that needs to be removed
   834      ```
   835      > istanbul.propose("0x2aabbc1bb9bacef60a09764d1a1f4f04a47885c1",false)
   836      null
   837      $
   838      $ geth attach node1/data/geth.ipc
   839      Welcome to the Geth JavaScript console!
   840          
   841      instance: Geth/v1.8.18-stable-bb88608c(quorum-v2.2.3)/darwin-amd64/go1.10.2
   842      coinbase: 0xc1056df7c02b6f1a353052eaf0533cc7cb743b52
   843      at block: 181 (Tue, 11 Jun 2019 16:36:27 BST)
   844      datadir: /Users/krish/fromscratchistanbul/node1/data
   845      modules: admin:1.0 debug:1.0 eth:1.0 istanbul:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
   846      > istanbul.propose("0x2aabbc1bb9bacef60a09764d1a1f4f04a47885c1",false)
   847      null
   848      $
   849      $ geth attach node2/data/geth.ipc
   850      Welcome to the Geth JavaScript console!
   851          
   852      instance: Geth/v1.8.18-stable-bb88608c(quorum-v2.2.3)/darwin-amd64/go1.10.2
   853      coinbase: 0xc1056df7c02b6f1a353052eaf0533cc7cb743b52
   854      at block: 181 (Tue, 11 Jun 2019 16:36:27 BST)
   855      datadir: /Users/krish/fromscratchistanbul/node2/data
   856      modules: admin:1.0 debug:1.0 eth:1.0 istanbul:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
   857      > istanbul.propose("0x2aabbc1bb9bacef60a09764d1a1f4f04a47885c1",false)
   858      null 
   859      $
   860      $ geth attach node3/data/geth.ipc
   861      Welcome to the Geth JavaScript console!
   862          
   863      instance: Geth/v1.8.18-stable-bb88608c(quorum-v2.2.3)/darwin-amd64/go1.10.2
   864      coinbase: 0xc1056df7c02b6f1a353052eaf0533cc7cb743b52
   865      at block: 181 (Tue, 11 Jun 2019 16:36:27 BST)
   866      datadir: /Users/krish/fromscratchistanbul/node3/data
   867      modules: admin:1.0 debug:1.0 eth:1.0 istanbul:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
   868      > istanbul.propose("0x2aabbc1bb9bacef60a09764d1a1f4f04a47885c1",false)
   869      null       
   870      ```
   871  3. Verify that the validator has been removed by running `istanbul.getValidators()`
   872      ```
   873      > istanbul.getValidators()
   874      ["0x189d23d201b03ae1cf9113672df29a5d672aefa3", "0x44b07d2c28b8ed8f02b45bd84ac7d9051b3349e6", "0x4c1ccd426833b9782729a212c857f2f03b7b4c0d", "0x7ae555d0f6faad7930434abdaac2274fd86ab516", "0xc1056df7c02b6f1a353052eaf0533cc7cb743b52"]
   875      ```
   876  4. Stop the `geth` process corresponding to the validator that was removed.
   877      ```
   878      $ ps
   879        PID TTY           TIME CMD
   880      10554 ttys000    0:00.11 -bash
   881      21829 ttys001    0:00.03 -bash
   882       9125 ttys002    0:00.94 -bash
   883      36432 ttys002    0:31.93 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22000 --rpcapi admin,
   884      36433 ttys002    0:30.75 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22001 --rpcapi admin,
   885      36434 ttys002    0:31.72 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22002 --rpcapi admin,
   886      36435 ttys002    0:31.65 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22003 --rpcapi admin,
   887      36436 ttys002    0:31.63 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22004 --rpcapi admin,
   888      36485 ttys002    0:06.86 geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22005 --rpcapi admin,
   889      36455 ttys003    0:00.05 -bash
   890      36493 ttys003    0:00.22 geth attach node4/data/geth.ipc
   891      $ kill 36485
   892      ```
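Rather than picking the PID out of the `ps` table by eye, the removed node's process can be matched on its unique `--rpcport` flag. A sketch — the two relevant lines of the `ps` output above are inlined so the snippet runs standalone; on a live system you would feed it `ps ax` and then `kill` the result:

```shell
ps_output='36436 ttys002    0:31.63 geth --datadir data ... --rpcport 22004 ...
36485 ttys002    0:06.86 geth --datadir data ... --rpcport 22005 ...'

# First column of the line whose command mentions the removed node's RPC port
pid=$(printf '%s\n' "$ps_output" | awk '/rpcport 22005/ {print $1}')
echo "$pid"    # 36485 — then: kill "$pid"
```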
   893  
   894  ### Adding non-validator node
   895  
   896  Follow the same instructions as for adding a validator, **excluding** step 3, which proposes the node as a validator.
   897  
   898  ### Removing non-validator node
   899  
   900  Execute only the **step 4** instructions from removing a validator.
   901  
   902  
   903  ## Adding privacy transaction manager
   904  ### Tessera
   905  1. Build Quorum and install [Tessera](https://github.com/jpmorganchase/tessera/releases) as described in the [Installing](../Installing) section. Ensure that PATH contains geth and bootnode. Be aware of the location of the `tessera.jar` release file
   906      ```
   907      $ git clone https://github.com/jpmorganchase/quorum.git
   908      $ cd quorum
   909      $ make all
   910      $ export PATH=$(pwd)/build/bin:$PATH
   911      $ cd ..
   912      .... copy the tessera jar to your desired destination and rename it to tessera.jar
   913      $ mv tessera-app-0.9.2-app.jar tessera.jar
   914      
   915      ```
   916  2. Generate new keys using `java -jar /path-to-tessera/tessera.jar -keygen -filename new-node-1`
   917      ```
   918      $ mkdir new-node-1t
   919      $ cd new-node-1t
   920      $ java -jar ../tessera.jar -keygen -filename new-node-1
   921      Enter a password if you want to lock the private key or leave blank
   922      
   923      Please re-enter the password (or lack of) to confirm
   924      
   925      10:32:51.256 [main] INFO  com.quorum.tessera.nacl.jnacl.Jnacl - Generating new keypair...
   926      10:32:51.279 [main] INFO  com.quorum.tessera.nacl.jnacl.Jnacl - Generated public key PublicKey[pnesVeDgs805ZPbnulzC5wokDzpdN7CeYKVUBXup/W4=] and private key REDACTED
   927      10:32:51.624 [main] INFO  c.q.t.k.generation.FileKeyGenerator - Saved public key to /Users/krish/fromscratch/new-node-1t/new-node-1.pub
   928      10:32:51.624 [main] INFO  c.q.t.k.generation.FileKeyGenerator - Saved private key to /Users/krish/fromscratch/new-node-1t/new-node-1.key
   929      ```
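A generated Tessera key pair is a NaCl (Curve25519) key pair, so the base64 value in the `.pub` file should decode to exactly 32 bytes — a cheap sanity check if you suspect a truncated or mangled key. A sketch using the sample public key from the output above:

```shell
pub='pnesVeDgs805ZPbnulzC5wokDzpdN7CeYKVUBXup/W4='
nbytes=$(printf '%s' "$pub" | base64 -d | wc -c)
echo "decoded length: $nbytes bytes"   # a valid key decodes to 32
```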
   930  3. Create a new configuration file referencing the newly generated keys. Name it `config.json`, as done in this example
   931      ```
   932      $ vim config.json
   933      {
   934         "useWhiteList": false,
   935         "jdbc": {
   936             "username": "sa",
   937             "password": "",
   938             "url": "jdbc:h2:/yourpath/new-node-1t/db1;MODE=Oracle;TRACE_LEVEL_SYSTEM_OUT=0",
   939             "autoCreateTables": true
   940         },
   941         "serverConfigs":[
   942             {
   943                 "app":"ThirdParty",
   944                 "enabled": true,
   945                 "serverAddress": "http://localhost:9081",
   946                 "communicationType" : "REST"
   947             },
   948             {
   949                 "app":"Q2T",
   950                 "enabled": true,
   951                  "serverAddress":"unix:/yourpath/new-node-1t/tm.ipc",
   952                 "communicationType" : "REST"
   953             },
   954             {
   955                 "app":"P2P",
   956                 "enabled": true,
   957                 "serverAddress":"http://localhost:9001",
   958                 "sslConfig": {
   959                     "tls": "OFF"
   960                 },
   961                 "communicationType" : "REST"
   962             }
   963         ],
   964         "peer": [
   965             {
   966                 "url": "http://localhost:9001"
   967             },
   968             {
   969                 "url": "http://localhost:9003"
   970             }
   971         ],
   972         "keys": {
   973             "passwords": [],
   974             "keyData": [
   975                 {
   976                     "privateKeyPath": "/yourpath/new-node-1t/new-node-1.key",
   977                     "publicKeyPath": "/yourpath/new-node-1t/new-node-1.pub"
   978                 }
   979             ]
   980         },
   981         "alwaysSendTo": []
   982      }
   983      ```
   984      
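    A stray comma or quote is easy to leave behind when editing this file. As an optional sanity check before starting Tessera, you can parse the file with Python's stdlib JSON parser (a hedged sketch, assuming `python3` is on your PATH and you run it from the node's Tessera directory):

    ```shell
    # Validate config.json with Python's stdlib JSON parser; a syntax
    # error (such as a trailing comma) is reported with a line number.
    if [ -f config.json ]; then
      result=$(python3 -m json.tool config.json > /dev/null && echo ok || echo invalid)
    else
      result=skipped   # run this from the directory containing config.json
    fi
    echo "config.json check: $result"
    ```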
   985  4. If you want to start another Tessera node, repeat steps 2 and 3
   986      ```
   987      $ cd ..
   988      $ mkdir new-node-2t
   989      $ cd new-node-2t
   990      $ java -jar ../tessera.jar -keygen -filename new-node-2
   991      Enter a password if you want to lock the private key or leave blank
   992      
   993      Please re-enter the password (or lack of) to confirm
   994      
   995      10:45:02.567 [main] INFO  com.quorum.tessera.nacl.jnacl.Jnacl - Generating new keypair...
   996      10:45:02.585 [main] INFO  com.quorum.tessera.nacl.jnacl.Jnacl - Generated public key PublicKey[AeggpVlVsi+rxD6h9tcq/8qL/MsjyipUnkj1nvNPgTU=] and private key REDACTED
   997      10:45:02.926 [main] INFO  c.q.t.k.generation.FileKeyGenerator - Saved public key to /Users/krish/fromscratch/new-node-2t/new-node-2.pub
   998      10:45:02.926 [main] INFO  c.q.t.k.generation.FileKeyGenerator - Saved private key to /Users/krish/fromscratch/new-node-2t/new-node-2.key
   999      $
  1000      $ cp ../new-node-1t/config.json .
  1001      $ vim config.json
  1002      .... paste below
  1003      {
  1004         "useWhiteList": false,
  1005         "jdbc": {
  1006             "username": "sa",
  1007             "password": "",
  1008           "url": "jdbc:h2:/yourpath/new-node-2t/db1;MODE=Oracle;TRACE_LEVEL_SYSTEM_OUT=0",
  1009             "autoCreateTables": true
  1010         },
  1011         "serverConfigs":[
  1012             {
  1013                 "app":"ThirdParty",
  1014                 "enabled": true,
  1015                 "serverAddress": "http://localhost:9083",
  1016                 "communicationType" : "REST"
  1017             },
  1018             {
  1019                 "app":"Q2T",
  1020                 "enabled": true,
  1021              "serverAddress": "unix:/yourpath/new-node-2t/tm.ipc",
  1022                 "communicationType" : "REST"
  1023             },
  1024             {
  1025                 "app":"P2P",
  1026                 "enabled": true,
  1027                 "serverAddress":"http://localhost:9003",
  1028                 "sslConfig": {
  1029                     "tls": "OFF"
  1030                 },
  1031                 "communicationType" : "REST"
  1032             }
  1033         ],
  1034         "peer": [
  1035             {
  1036                 "url": "http://localhost:9001"
  1037             },
  1038             {
  1039                 "url": "http://localhost:9003"
  1040             }
  1041         ],
  1042         "keys": {
  1043             "passwords": [],
  1044             "keyData": [
  1045                 {
  1046                     "privateKeyPath": "/yourpath/new-node-2t/new-node-2.key",
  1047                     "publicKeyPath": "/yourpath/new-node-2t/new-node-2.pub"
  1048                 }
  1049             ]
  1050         },
  1051         "alwaysSendTo": []
  1052      }
  1053      
  1054      ```
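    Tessera keys are NaCl (Curve25519) key pairs, so the base64 string in each `.pub` file should decode to exactly 32 bytes. An optional sanity check, using the sample public key from the transcript above (substitute the contents of your own `.pub` file; `base64 -d` is the GNU flag, older macOS releases use `-D`):

    ```shell
    # Decode a Tessera public key and confirm it is 32 bytes long.
    # KEY below is the sample key from this walkthrough; paste your own.
    KEY="AeggpVlVsi+rxD6h9tcq/8qL/MsjyipUnkj1nvNPgTU="
    LEN=$(printf '%s' "$KEY" | base64 -d | wc -c | tr -d ' ')
    echo "decoded key length: $LEN bytes"
    ```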
  1055  5. Start your Tessera nodes and send them into the background
  1056      ```
  1057      $ java -jar ../tessera.jar -configfile config.json >> tessera.log 2>&1 &
  1058      [1] 38064
  1059      $ cd ../new-node-1t
  1060      $ java -jar ../tessera.jar -configfile config.json >> tessera.log 2>&1 &
  1061      [2] 38076
  1062      $ ps
  1063        PID TTY           TIME CMD
  1064      10554 ttys000    0:00.12 -bash
  1065      38234 ttys000    1:15.31 /usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/Resources/Python.app/Contents/MacOS/Python /usr/local/bin/mkdocs serve
  1066      21829 ttys001    0:00.18 -bash
  1067       9125 ttys002    0:01.52 -bash
  1068      38072 ttys002    1:18.42 /usr/bin/java -jar ../tessera.jar -configfile config.json
  1069      38076 ttys002    1:15.86 /usr/bin/java -jar ../tessera.jar -configfile config.json
  1070      ```
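    Before wiring up the Quorum nodes, you can confirm each Tessera instance is answering. Tessera exposes a simple `/upcheck` endpoint (here on the P2P ports 9001 and 9003 configured above) that returns `I'm up!` once the node has started; this check is optional:

    ```shell
    # Ask each Tessera P2P port whether the node is up; a running node
    # answers /upcheck with the text "I'm up!".
    for port in 9001 9003; do
      reply=$(curl -s "http://localhost:$port/upcheck" 2>/dev/null || echo "unreachable")
      echo "tessera on port $port: $reply"
    done
    ```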
  1071  
  1072  6. Start the Quorum nodes attached to the running Tessera nodes from above and send them into the background
  1073      ```
  1074      ... update the start scripts to include PRIVATE_CONFIG
  1075      $ cd ..
  1076      $ vim startnode1.sh
  1077      ... paste below
  1078      #!/bin/bash
  1079      PRIVATE_CONFIG=/yourpath/new-node-1t/tm.ipc nohup geth --datadir new-node-1 --nodiscover --verbosity 5 --networkid 31337 --raft --raftport 50000 --rpc --rpcaddr 0.0.0.0 --rpcport 22000 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,raft --emitcheckpoints --port 21000 >> node.log 2>&1 &
  1080      $ vim startnode2.sh
  1081      ... paste below
  1082      #!/bin/bash
  1083      PRIVATE_CONFIG=/yourpath/new-node-2t/tm.ipc nohup geth --datadir new-node-2 --nodiscover --verbosity 5 --networkid 31337 --raft --raftport 50001 --raftjoinexisting 2 --rpc --rpcaddr 0.0.0.0 --rpcport 22001 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,raft --emitcheckpoints --port 21001 2>>node2.log &
  1084      $ ./startnode1.sh
  1085      $ ./startnode2.sh
  1086      $ ps
  1087        PID TTY           TIME CMD
  1088      10554 ttys000    0:00.12 -bash
  1089      21829 ttys001    0:00.18 -bash
  1090       9125 ttys002    0:01.49 -bash
  1091      38072 ttys002    0:48.92 /usr/bin/java -jar ../tessera.jar -configfile config.json
  1092      38076 ttys002    0:47.60 /usr/bin/java -jar ../tessera.jar -configfile config.json
  1093      38183 ttys002    0:08.15 geth --datadir new-node-1 --nodiscover --verbosity 5 --networkid 31337 --raft --raftport 50000 --rpc --rpcaddr 0.0.0.0 --rpcport 22000 --rpcapi admin,db,eth,debug,miner,net,shh,txpoo
  1094      38204 ttys002    0:00.19 geth --datadir new-node-2 --nodiscover --verbosity 5 --networkid 31337 --raft --raftport 50001 --raftjoinexisting 2 --rpc --rpcaddr 0.0.0.0 --rpcport 22001 --rpcapi admin,db,eth,debu
  1095      36455 ttys003    0:00.15 -bash  
  1096      ```
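    With both processes running, you can check that node 2 actually joined the raft cluster from either node's console; `raft.cluster` lists every member, so with the setup above it should show two entries (raft ports 50000 and 50001). A hedged, non-interactive sketch using `geth --exec`:

    ```shell
    # List raft cluster membership without opening an interactive console.
    # NODE_IPC is relative to the fromscratch working directory.
    NODE_IPC="new-node-1/geth.ipc"
    if command -v geth > /dev/null; then
      geth --exec 'raft.cluster' attach "$NODE_IPC"
    else
      echo "geth not on PATH; skipping"
    fi
    ```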
  1097  
  1098      !!! note
  1099          The Tessera IPC bridge uses a file whose name is defined in your `config.json`, usually `tm.ipc`, as seen in the `PRIVATE_CONFIG` prefix of the start scripts. Your node can now send and receive private transactions; its advertised public key is in the `new-node-1.pub` file. Tessera offers a lot of configuration flexibility; refer to the [Configuration](../../Privacy/Tessera/Configuration/Configuration%20Overview) section under Tessera for complete and up-to-date configuration options.
  1100          
  1101      Your node is now operational and you may attach to it with `geth attach new-node-1/geth.ipc` to send private transactions. 
  1102      ```
  1103      $ vim private-contract.js
  1104      ... create a simple private contract that sends a transaction from new-node-1, private for new-node-2's Tessera public key generated in step 4
  1105      a = eth.accounts[0]
  1106      web3.eth.defaultAccount = a;
  1107      
  1108      // abi and bytecode generated from simplestorage.sol:
  1109      // > solcjs --bin --abi simplestorage.sol
  1110      var abi = [{"constant":true,"inputs":[],"name":"storedData","outputs":[{"name":"","type":"uint256"}],"payable":false,"type":"function"},{"constant":false,"inputs":[{"name":"x","type":"uint256"}],"name":"set","outputs":[],"payable":false,"type":"function"},{"constant":true,"inputs":[],"name":"get","outputs":[{"name":"retVal","type":"uint256"}],"payable":false,"type":"function"},{"inputs":[{"name":"initVal","type":"uint256"}],"payable":false,"type":"constructor"}];
  1111      
  1112      var bytecode = "0x6060604052341561000f57600080fd5b604051602080610149833981016040528080519060200190919050505b806000819055505b505b610104806100456000396000f30060606040526000357c0100000000000000000000000000000000000000000000000000000000900463ffffffff1680632a1afcd914605157806360fe47b11460775780636d4ce63c146097575b600080fd5b3415605b57600080fd5b606160bd565b6040518082815260200191505060405180910390f35b3415608157600080fd5b6095600480803590602001909190505060c3565b005b341560a157600080fd5b60a760ce565b6040518082815260200191505060405180910390f35b60005481565b806000819055505b50565b6000805490505b905600a165627a7a72305820d5851baab720bba574474de3d09dbeaabc674a15f4dd93b974908476542c23f00029";
  1113      
  1114      var simpleContract = web3.eth.contract(abi);
  1115      var simple = simpleContract.new(42, {from:web3.eth.accounts[0], data: bytecode, gas: 0x47b760, privateFor: ["AeggpVlVsi+rxD6h9tcq/8qL/MsjyipUnkj1nvNPgTU="]}, function(e, contract) {
  1116      	if (e) {
  1117      		console.log("err creating contract", e);
  1118      	} else {
  1119      		if (!contract.address) {
  1120      			console.log("Contract transaction send: TransactionHash: " + contract.transactionHash + " waiting to be mined...");
  1121      		} else {
  1122      			console.log("Contract mined! Address: " + contract.address);
  1123      			console.log(contract);
  1124      		}
  1125      	}
  1126      });
  1127      $
  1128      $
  1129      ```
  1130      
  1131      !!! note
  1132          Accounts created in geth are locked by default, so unlock the account before sending a transaction
  1133          
  1134      ```
  1135      $
  1136      $ geth attach new-node-1/geth.ipc
  1137      > eth.accounts
  1138      ["0x23214cd88f46865207fa1d2a69971a37cdbf526a"]
  1139      > personal.unlockAccount("0x23214cd88f46865207fa1d2a69971a37cdbf526a");
  1140      Unlock account 0x23214cd88f46865207fa1d2a69971a37cdbf526a
  1141      Passphrase: 
  1142      true
  1143      > loadScript("private-contract.js")
  1144      Contract transaction send: TransactionHash: 0x7d3bf7612ef10c71f752e881648b7c8c4eee3223acab151ee0652447790836a6 waiting to be mined...
  1145      true
  1146      > Contract mined! Address: 0xe975e1e11c5268b1efcbf39b7ee3cf7b8dc85fd7
  1147      ```
  1148  
  1149      You have **successfully** sent a private transaction from node 1 to node 2!
  1150      
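    To convince yourself the state really reached node 2's private store, read the contract back from node 2, which was named in `privateFor`. A hedged sketch using `geth --exec` (the address is the one mined in this transcript, so substitute your own; the one-entry ABI fragment only exposes `get`):

    ```shell
    # Call get() on the private contract from node 2, non-interactively.
    # ADDR is the contract address from the transcript above; use yours.
    ABI='[{"constant":true,"inputs":[],"name":"get","outputs":[{"name":"retVal","type":"uint256"}],"payable":false,"type":"function"}]'
    ADDR="0xe975e1e11c5268b1efcbf39b7ee3cf7b8dc85fd7"
    if command -v geth > /dev/null; then
      geth --exec "eth.contract($ABI).at(\"$ADDR\").get()" attach new-node-2/geth.ipc
    else
      echo "geth not on PATH; skipping"
    fi
    ```

    Since node 2 was a party to the transaction, this should return 42; a node that was not listed in `privateFor` would only see the default value.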
  1151      !!! note
  1152          If you do not have a valid public key in the `privateFor` array in private-contract.js, you will see the following error when the script is loaded.
  1153      ```
  1154      > loadScript("private-contract.js")
  1155      err creating contract Error: Non-200 status code: &{Status:400 Bad Request StatusCode:400 Proto:HTTP/1.1 ProtoMajor:1      ProtoMinor:1 Header:map[Date:[Mon, 17 Jun 2019 15:23:53 GMT] Content-Type:[text/plain] Content-Length:[73] Server:[Jetty(9.4.z-SNAPSHOT)]] Body:0xc01997a580 ContentLength:73 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc019788200 TLS:<nil>}
  1156      ```    
  1157  
  1158  ### Constellation
  1159  1. Build Quorum and install [Constellation](https://github.com/jpmorganchase/constellation/releases) as described in the [Installing](../Installing) section. Ensure that PATH contains geth, bootnode, and constellation-node binaries
  1160  2. Generate new keys with `constellation-node --generatekeys=new-node-1`
  1161  3. Start your constellation node and send it into background with `constellation-node --url=https://127.0.0.1:9001/ --port=9001 --workdir=. --socket=tm.ipc --publickeys=new-node-1.pub --privatekeys=new-node-1.key --othernodes=https://127.0.0.1:9001/ >> constellation.log 2>&1 &`
  1162  4. Start your node and send it into background with `PRIVATE_CONFIG=tm.ipc nohup geth --datadir new-node-1 --nodiscover --verbosity 5 --networkid 31337 --raft --raftport 50000 --rpc --rpcaddr 0.0.0.0 --rpcport 22000 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,raft --emitcheckpoints --port 21000 2>>node.log &`
  1163  
  1164      !!! note
  1165          The Constellation IPC bridge uses the file named by the `--socket` option in step 3 above (here `tm.ipc`). Your node can now send and receive private transactions; its advertised public key is in the `new-node-1.pub` file.
  1166  
  1167      Your node is now operational and you may attach to it with `geth attach new-node-1/geth.ipc`. 
  1168  
  1169  
  1170  ## Enabling permissioned configuration
  1171  Quorum ships with a permissions system based on a custom whitelist. Detailed documentation is available in [Network Permissioning](../../Security/Framework/Quorum%20Network%20Security/Nodes/Permissioning/Network%20Permissioning).